    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Interactive Coding with Constant Round and Communication Blowup

    Adaptively Secure Coin-Flipping, Revisited

    The full-information model was introduced by Ben-Or and Linial in 1985 to study collective coin-flipping: the problem of generating a common bounded-bias bit in a network of $n$ players with $t = t(n)$ faults. They showed that the majority protocol can tolerate $t = O(\sqrt{n})$ adaptive corruptions, and conjectured that this is optimal in the adaptive setting. Lichtenstein, Linial, and Saks proved that the conjecture holds for protocols in which each player sends a single bit. Their result has been the main progress on the conjecture in the last 30 years. In this work we revisit this question and ask: what about protocols involving longer messages? Can increased communication allow for a larger fraction of faulty players? We introduce a model of strong adaptive corruptions, where in each round, the adversary sees all messages sent by honest parties and, based on the message content, decides whether to corrupt a party (and intercept his message) or not. We prove that any one-round coin-flipping protocol, regardless of message length, is secure against at most $\tilde{O}(\sqrt{n})$ strong adaptive corruptions. Thus, increased message length does not help in this setting. We then shed light on the connection between adaptive and strongly adaptive adversaries, by proving that for any symmetric one-round coin-flipping protocol secure against $t$ adaptive corruptions, there is a symmetric one-round coin-flipping protocol secure against $t$ strongly adaptive corruptions. Returning to the standard adaptive model, we can now prove that any symmetric one-round protocol with arbitrarily long messages can tolerate at most $\tilde{O}(\sqrt{n})$ adaptive corruptions. At the heart of our results lies a novel use of the Minimax Theorem and a new technique for converting any one-round secure protocol into a protocol with messages of $\mathrm{polylog}(n)$ bits. This technique may be of independent interest.
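    As a concrete point of reference for the model described above, the sketch below (Python; the function names and the naive greedy attack are illustrative assumptions, not taken from the paper) shows the classic one-round majority protocol together with a toy strongly adaptive adversary that sees all honest bits before intercepting up to t of them.

        import random

        def majority_coin_flip(n, t, seed=None):
            """One-round majority protocol with a toy strongly adaptive adversary."""
            rng = random.Random(seed)
            bits = [rng.randint(0, 1) for _ in range(n)]  # each player's one-bit message

            # Strongly adaptive adversary: it sees every honest bit before delivery,
            # then intercepts up to t messages and flips them toward its target (1).
            zero_senders = [i for i, b in enumerate(bits) if b == 0]
            for i in zero_senders[:t]:
                bits[i] = 1

            return int(2 * sum(bits) > n)  # the common output bit: majority of all bits

        # Flipping t ~ sqrt(n) zero-bits shifts the sum by roughly one standard
        # deviation of a fair binomial, which already biases the majority noticeably.
        print(majority_coin_flip(n=10_001, t=100, seed=0))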

    Attacks on the Fiat-Shamir paradigm and program obfuscation

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 115-119). The goal of cryptography is to construct *secure* and *efficient* protocols for various tasks. Unfortunately, it is often the case that protocols that are provably secure are not efficient enough for practical use. As a result, most protocols used in practice are *heuristics* that lack proofs of security. These heuristics are typically very efficient and are believed to be secure, though no proof of security has been provided. In this thesis we study the security of some of these popular heuristics. In particular, we focus on two types of heuristics: (1) the Fiat-Shamir heuristic for constructing digital signature schemes, and (2) heuristics for obfuscation. We show that, in some sense, both of these types of heuristics are insecure. Thus, this thesis consists of two parts: (1) The insecurity of the Fiat-Shamir paradigm: The Fiat-Shamir heuristic provides a general method for transforming secure 3-round public-coin identification schemes into digital signature schemes. The idea of the transformation is to replace the random (second-round) message of the verifier in the identification scheme with the value of some deterministic hash function evaluated on the first-round message (sent by the prover) and on the message to be signed. The Fiat-Shamir methodology for producing digital signature schemes quickly gained popularity both in theory and in practice, as it yields efficient and easy-to-implement digital signature schemes. The most important question, however, remained open: are the digital signature schemes produced by the Fiat-Shamir methodology secure? In this thesis, we answer this question negatively. We show that there exist secure 3-round public-coin identification schemes for which the Fiat-Shamir transformation yields *insecure* digital signature schemes for *any* hash function used by the transformation. This is in contrast to the work of Pointcheval and Stern, who proved that the Fiat-Shamir methodology always produces digital signature schemes that are secure against chosen message attacks in the "Random Oracle Model", i.e., when the hash function is modeled by a random oracle. (2) The impossibility of obfuscation: The goal of code obfuscation is to make a program completely "unintelligible" while preserving its functionality. Obfuscation has been used for many years in attempts to prevent reverse engineering, e.g., in copy protection, licensing schemes, and games. As a result, many heuristics for obfuscation have emerged, and the important question that remained is: are these heuristics for obfuscation secure? In this thesis, we show that there are many "natural" classes of functions for which obfuscation is not at all possible. This impossibility result holds in an augmentation of the formal obfuscation model of Barak et al. (2001) that includes auxiliary input. In both of these parts, among other tools, we make new usage of Barak's technique for taking advantage of non-black-box access to a program, this time in the context of digital signature schemes and in the context of obfuscation. By Yael Tauman Kalai. Ph.D.
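    For orientation, here is a minimal sketch of the Fiat-Shamir transformation described in the abstract, assuming a generic 3-round public-coin identification scheme supplied as callbacks (prover_commit, prover_respond, id_verify); the callback names and the SHA-256 choice are illustrative assumptions, not part of the thesis.

        import hashlib

        def fs_challenge(first_msg, message):
            # The verifier's random second-round message is replaced by a deterministic
            # hash of the prover's first-round message and the message to be signed.
            return hashlib.sha256(first_msg + message).digest()

        def fs_sign(sk, message, prover_commit, prover_respond):
            first_msg, state = prover_commit(sk)              # round 1 (prover)
            challenge = fs_challenge(first_msg, message)      # round 2, derandomized
            response = prover_respond(sk, state, challenge)   # round 3 (prover)
            return first_msg, response

        def fs_verify(pk, message, signature, id_verify):
            first_msg, response = signature
            challenge = fs_challenge(first_msg, message)      # recompute the challenge
            return id_verify(pk, first_msg, challenge, response)

    The negative result of the thesis says that for some identification schemes this template yields an insecure signature scheme no matter which hash function plays the role of fs_challenge.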

    A Framework for Efficient Signatures, Ring Signatures and Identity Based Encryption in the Standard Model

    In this work, we present a generic framework for constructing efficient signature schemes, ring signature schemes, and identity based encryption schemes, all in the standard model (without relying on random oracles). We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their "prefix method". We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack). Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000), and the "prefix method". Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present *simple* constructions based on the RSA assumption, the *short integer solution* (SIS) assumption, and the *computational Diffie-Hellman* (CDH) assumption over bilinear groups. Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the *short integer solution* (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call *ring trapdoor functions*. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security. Finally, we show a connection between ring signature schemes and identity based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: the first is based on the *learning with errors* (LWE) assumption, and is similar to the recently introduced IBE scheme of Cash-Hofheinz-Kiltz-Peikert; the second is based on the $d$-linear assumption over bilinear groups.
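    To make the role of the chameleon hash concrete, here is a toy sketch (tiny, insecure parameters; the helper names and the abstract weak-signature callbacks are illustrative assumptions): messages are first hashed with the discrete-log chameleon hash of Krawczyk and Rabin, and the hash value is then signed with an arbitrary weakly secure scheme. The prefix method and the paper's concrete weakly secure schemes are not reproduced here.

        import secrets

        # Tiny toy group: p = 23 is a safe prime, q = 11, and g = 2 has order 11 mod 23.
        # Real instantiations need cryptographically large parameters.
        P, Q, G = 23, 11, 2

        def cham_keygen():
            x = secrets.randbelow(Q - 1) + 1               # trapdoor
            return pow(G, x, P), x                         # public hash key h = g^x, trapdoor x

        def cham_hash(h, m, r):
            return (pow(G, m % Q, P) * pow(h, r, P)) % P   # CH(m, r) = g^m * h^r mod p

        def cham_collide(x, m, r, m_new):
            # The trapdoor finds r' with CH(m_new, r') = CH(m, r); this "chameleon"
            # property is what lets the reduction handle adaptively chosen messages.
            return (r + (m - m_new) * pow(x, -1, Q)) % Q

        def sign(weak_sign, sig_sk, h, m):
            r = secrets.randbelow(Q)
            return weak_sign(sig_sk, cham_hash(h, m, r)), r   # sign the hash value only

        def verify(weak_verify, sig_pk, h, m, signature):
            sigma, r = signature
            return weak_verify(sig_pk, cham_hash(h, m, r), sigma)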

    SNARGs for Bounded Depth Computations from Sub-Exponential LWE

    We construct a succinct non-interactive publicly-verifiable delegation scheme for any log-space uniform circuit under the sub-exponential $\mathsf{LWE}$ assumption, a standard assumption that is believed to be post-quantum secure. For a circuit of size $S$ and depth $D$, the prover runs in time $\mathrm{poly}(S)$, and the verifier runs in time $(D + n) \cdot S^{o(1)}$, where $n$ is the input size. We obtain this result by slightly modifying the $\mathsf{GKR}$ protocol and proving that the Fiat-Shamir heuristic is sound when applied to this modified protocol. We build on the recent works of Canetti et al. (STOC 2019) and Peikert and Shiehian (Crypto 2020), which prove the soundness of the Fiat-Shamir heuristic when applied to a specific (non-succinct) zero-knowledge protocol. As a corollary, by the work of Choudhuri et al. (STOC 2019), this implies that the complexity class $\mathsf{PPAD}$ is hard (on average) under the sub-exponential $\mathsf{LWE}$ assumption, assuming that $\#\mathsf{SAT}$ with $o(\log n \cdot \log\log n)$ variables is hard (on average).

    A Parallel Repetition Theorem for Leakage Resilience

    A leakage resilient encryption scheme is one which stays secure even against an attacker that obtains a bounded amount of side information on the secret key (say $\lambda$ bits of "leakage"). A fundamental question is whether parallel repetition amplifies leakage resilience. Namely, if we secret share our message, and encrypt the shares under two independent keys, will the resulting scheme be resilient to $2\lambda$ bits of leakage? Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an example of a public-key encryption scheme that is (CPA) resilient to $\lambda$ bits of leakage, and yet its 2-repetition is not resilient to even $(1+\epsilon)\lambda$ bits of leakage. In their counter-example, the repeated schemes share secretly generated public parameters. In this work, we show that under a reasonable strengthening of the definition of leakage resilience (one that captures known proof techniques for achieving non-trivial leakage resilience), parallel repetition *does* in fact amplify leakage (for CPA security). In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then their negative result does not hold, and leakage is amplified by parallel repetition. More generally, we show that given $t$ schemes that are resilient to $\lambda_1, \ldots, \lambda_t$ bits of leakage, respectively, their direct product is resilient to $\sum_i (\lambda_i - 1)$ bits. We present our amplification theorem in a general framework that applies to other cryptographic primitives as well.
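    As an illustration of the direct-product construction discussed above, the sketch below (Python; the underlying scheme is abstract and passed in as keygen/enc/dec callbacks, and all names are illustrative rather than the paper's notation) XOR-secret-shares the message into t shares and encrypts each share under an independently generated key, so that each copy uses its own fresh parameters.

        import secrets

        def xor_bytes(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def product_keygen(keygen, t):
            # t independent key pairs, each generated with fresh public parameters.
            pairs = [keygen() for _ in range(t)]
            pks, sks = zip(*pairs)
            return list(pks), list(sks)

        def product_encrypt(enc, pks, message):
            # XOR secret sharing: t-1 uniformly random shares plus one share fixing the XOR.
            shares = [secrets.token_bytes(len(message)) for _ in range(len(pks) - 1)]
            last = message
            for s in shares:
                last = xor_bytes(last, s)
            shares.append(last)
            return [enc(pk, share) for pk, share in zip(pks, shares)]

        def product_decrypt(dec, sks, ciphertexts):
            shares = [dec(sk, ct) for sk, ct in zip(sks, ciphertexts)]
            out = shares[0]
            for s in shares[1:]:
                out = xor_bytes(out, s)
            return out

    Intuitively, any proper subset of the shares is uniformly random, so an attacker must learn something about every secret key to learn the message; the theorem above quantifies this, under the strengthened definition, as resilience to $\sum_i (\lambda_i - 1)$ bits of leakage.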